Multimodal Emotion Recognition


Multimodal emotion recognition is the task of inferring a person's emotional state from multiple complementary signals, such as speech, text, facial expressions, and physiological measurements like EEG.
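As a rough illustration of how such modalities can be combined, here is a minimal late-fusion sketch (not taken from any of the papers listed below). It assumes each modality arrives as a pre-extracted feature vector; the feature dimensions, the seven-way emotion label set, and the class name are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Toy late-fusion model: encode each modality, concatenate, classify."""

    def __init__(self, audio_dim=128, text_dim=768, face_dim=512,
                 hidden_dim=256, num_emotions=7):
        super().__init__()
        # One small encoder per modality projects its features into a shared space.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # The concatenated (fused) representation is mapped to emotion logits.
        self.classifier = nn.Linear(3 * hidden_dim, num_emotions)

    def forward(self, audio_feat, text_feat, face_feat):
        fused = torch.cat([
            self.audio_enc(audio_feat),
            self.text_enc(text_feat),
            self.face_enc(face_feat),
        ], dim=-1)
        return self.classifier(fused)

# Random placeholder features for a batch of 4 utterances.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 7])
```

Late fusion is only one of several strategies; the papers below explore others, including cross-modal contrastive learning and knowledge injection.

Recent papers on multimodal emotion recognition: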

VAEmo: Efficient Representation Learning for Visual-Audio Emotion with Knowledge Injection
May 05, 2025

Emotions in the Loop: A Survey of Affective Computing for Emotional Support
May 02, 2025

DEEMO: De-identity Multimodal Emotion Recognition and Reasoning
Apr 28, 2025

AffectEval: A Modular and Customizable Framework for Affective Computing
Apr 29, 2025

A Survey on Multimodal Music Emotion Recognition
Apr 26, 2025

Towards Robust Multimodal Physiological Foundation Models: Handling Arbitrary Missing Modalities
Apr 28, 2025

PsyCounAssist: A Full-Cycle AI-Powered Psychological Counseling Assistant System
Apr 23, 2025

PhysioSync: Temporal and Cross-Modal Contrastive Learning Inspired by Physiological Synchronization for EEG-Based Emotion Recognition
Apr 24, 2025

Leveraging Label Potential for Enhanced Multimodal Emotion Recognition
Apr 07, 2025

Multimodal Representation Learning Techniques for Comprehensive Facial State Analysis
Apr 14, 2025